Influence operation


Did artificial intelligence shape the 2024 US election?

Al Jazeera

Days after New Hampshire voters received a robocall with an artificially generated voice that resembled President Joe Biden's, the Federal Communications Commission banned the use of AI-generated voices in robocalls. The 2024 United States election would be the first to unfold amid wide public access to AI generators, which let people create images, audio and video – some for nefarious purposes. Institutions rushed to limit AI-enabled misdeeds. Sixteen states enacted legislation around AI's use in elections and campaigns; many of these states required disclaimers in synthetic media published close to an election. The Election Assistance Commission, a federal agency supporting election administrators, published an "AI toolkit" with tips election officials could use to communicate about elections in an age of fabricated information.


Considerations Influencing Offense-Defense Dynamics From Artificial Intelligence

Corsi, Giulio, Kilian, Kyle, Mallah, Richard

arXiv.org Artificial Intelligence

The rapid advancement of artificial intelligence (AI) technologies presents profound challenges to societal safety. As AI systems become more capable, accessible, and integrated into critical services, the dual nature of their potential is increasingly clear. While AI can enhance defensive capabilities in areas like threat detection, risk assessment, and automated security operations (Hassanin and Moustafa, 2024), it also presents avenues for malicious exploitation and large-scale societal harm, for example through automated influence operations and cyberattacks (Goldstein et al., 2023; Xu et al., 2024a). Understanding the dynamics that shape AI's capacity both to cause harm and to enhance protective measures is essential for informed decision-making regarding the deployment, use, and integration of advanced AI systems. This paper builds on recent work on offense-defense dynamics within the realm of AI (Schneier, 2018; Garfinkel and Dafoe, 2021), proposing a taxonomy to map and examine the key factors that influence whether AI systems predominantly pose threats or offer protective benefits to society. By establishing a shared terminology and conceptual foundation for analyzing these interactions, this work seeks to facilitate further research and discourse in this critical area.


Coordinated Reply Attacks in Influence Operations: Characterization and Detection

Pote, Manita, Elmas, Tuğrulcan, Flammini, Alessandro, Menczer, Filippo

arXiv.org Artificial Intelligence

Coordinated reply attacks are a tactic observed in online influence operations and other coordinated campaigns, used to support or harass targeted individuals, or to influence them or their followers. Despite the tactic's potential to influence the public, past studies have not analyzed it or provided a methodology for detecting it. In this study, we characterize coordinated reply attacks in the context of influence operations on Twitter. Our analysis reveals that the primary targets of these attacks are influential people such as journalists, news media, state officials, and politicians. We propose two supervised machine-learning models: one that classifies tweets to determine whether they are targeted by a reply attack, and one that classifies accounts replying to a targeted tweet to determine whether they are part of a coordinated attack. The classifiers achieve AUC scores of 0.88 and 0.97, respectively. These results indicate that accounts involved in reply attacks can be detected, and that the targeted accounts themselves can serve as sensors for influence operation detection.
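To make the two-classifier setup concrete, the sketch below shows the general shape of the second task, classifying accounts that reply to a targeted tweet, using scikit-learn. The account-level features, the synthetic data, and the random-forest model are illustrative assumptions; the abstract does not disclose the authors' actual feature set or learning algorithm.

```python
# Minimal sketch only; NOT the paper's implementation. The feature set,
# the random-forest model, and the synthetic data below are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Stand-in account-level features (e.g., account age, follower count,
# replies per day, fraction of replies aimed at the targeted author).
# Random values replace real measurements for this illustration.
X = rng.random((1000, 4))
# Stand-in labels: 1 = account is part of a coordinated reply attack.
y = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

# AUC on held-out data, the metric the abstract reports (0.88 and 0.97).
scores = clf.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, scores):.2f}")
```

In practice, the random arrays would be replaced with features measured from real reply activity and labels drawn from documented influence operations; AUC on held-out data matches the evaluation metric the abstract cites.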


The Download: AI propaganda, and digital twins

MIT Technology Review

Renée DiResta is the research manager of the Stanford Internet Observatory and the author of Invisible Rulers: The People Who Turn Lies into Reality.


Propagandists are using AI too--and companies need to be open about it

MIT Technology Review

At the end of May, OpenAI marked a new "first" in its corporate history. It wasn't an even more powerful language model or a new data partnership, but a report disclosing that bad actors had misused its products to run influence operations. The company had caught five networks of covert propagandists, including players from Russia, China, Iran, and Israel, using its generative AI tools for deceptive tactics that ranged from creating large volumes of social media comments in multiple languages to turning news articles into Facebook posts. The use of these tools, OpenAI noted, seemed intended to improve the quality and quantity of output. AI gives propagandists a productivity boost too.


OpenAI says it stopped multiple covert influence operations that abused its AI models

Engadget

OpenAI said that it stopped five covert influence operations over the last three months that used its AI models for deceptive activities across the internet. These operations, which originated from Russia, China, Iran and Israel, attempted to manipulate public opinion and influence political outcomes without revealing their true identities or intentions, the company said on Thursday. "As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services," OpenAI said in a report about the operations, adding that it worked with people across the tech industry, civil society and governments to cut off these bad actors. OpenAI's report comes amid concerns about the impact of generative AI on the many elections slated around the world this year, including in the US. In its findings, OpenAI revealed how networks of people engaged in influence operations have used generative AI to produce text and images at much higher volumes than before, and to fake engagement by generating comments on social media posts.


OpenAI says Russian and Israeli groups used its tools to spread disinformation

The Guardian

OpenAI on Thursday released its first-ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran. Malicious actors used the company's generative AI models to create and post propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report. As generative AI has become a booming industry, researchers and lawmakers have grown widely concerned about its potential to increase the quantity and quality of online disinformation. Artificial intelligence companies such as OpenAI, which makes ChatGPT, have tried, with mixed results, to assuage these concerns and place guardrails on their technology.


OpenAI Says Russia, China, and Israel Are Using Its Tools for Foreign Influence Campaigns

TIME - Tech

OpenAI identified and removed five covert influence operations based in Russia, China, Iran and Israel that were using its artificial intelligence tools to manipulate public opinion, the company said on Thursday. In a new report, OpenAI detailed how these groups, some of which are linked to known propaganda campaigns, used the company's tools for a variety of "deceptive activities." These included generating social media comments, articles, and images in multiple languages, creating names and biographies for fake accounts, debugging code, and translating and proofreading texts. In their attempts to sway public opinion, these networks focused on a range of issues, including defending the war in Gaza and Russia's invasion of Ukraine, criticizing Chinese dissidents, and commenting on politics in India, Europe, and the U.S. While these influence operations targeted a wide range of online platforms, including X (formerly known as Twitter), Telegram, Facebook, Medium, Blogspot, and other sites, "none managed to engage a substantial audience," according to OpenAI analysts.


China Is Using AI to Sow Disinformation and Stoke Discord Across Asia and the U.S., Microsoft Reports

TIME - Tech

Faking a political endorsement in Taiwan ahead of its crucial January election, sharing memes to amplify outrage over Japan's disposal of nuclear wastewater, and spreading conspiracy theories that claim the U.S. government was behind Hawaii's wildfires and Kentucky's train derailment last year. These are just some of the ways that China's influence operations have ramped up their use of artificial intelligence to sow disinformation and stoke discord worldwide over the last seven months, according to a new report released Friday by Microsoft Threat Intelligence. Microsoft has observed notable trends from state-backed actors, the report said, "that demonstrate not only doubling down on familiar targets, but also attempts to use more sophisticated influence techniques to achieve their goals." In particular, Chinese influence actors "experimented with new media" and "continued to refine AI-generated or AI-enhanced content." Among the operations highlighted in the report was "a notable uptick in content featuring Taiwanese political figures ahead of the January 13 presidential and legislative elections."


China turns to AI in propaganda mocking the 'American Dream'

Al Jazeera

"They say it's for all, but is it really?" So begins a 65-second, AI-generated animated video that touches on hot-button issues in the United States ranging from drug addiction and imprisonment rates to growing wealth inequality. As storm clouds gather over an urban landscape resembling New York City, the words "AMERICAN DREAM" hang in a darkening sky as the video ends. The message is clear: Despite its promises of a better life for all, the United States is in terminal decline. The video, titled American Dream or American Mirage, is one of a number of segments aired by Chinese state broadcaster CGTN – and shared far and wide on social media – as part of its A Fractured America animated series. Other videos in the series carry similar titles that invoke images of a dystopian society, such as American workers in tumult: A result of unbalanced politics and economy, and Unmasking the real threat: America's military-industrial complex. CGTN and the Chinese embassy in Washington, DC, did not respond to requests for comment. The A Fractured America series is just one example of how artificial intelligence (AI), with its ability to generate high-quality multimedia with minimal effort in seconds, is beginning to shape Beijing's propaganda efforts to undermine the United States' standing in the world. Henry Ajder, a UK-based expert in generative AI, said that while the CGTN series does not attempt to pass itself off as genuine video, it is a clear example of how AI has made it far easier and cheaper to churn out content. "The reason that they've done it in this way is, you could hire an animator and a voiceover artist to do this, but it would probably end up being more time-consuming."